golden ratio


The Mysterious Math Behind the Brazilian Butt Lift

WIRED

For years, plastic surgeons thought the proportions of beautiful buttocks should follow the Fibonacci sequence. Now, people are looking for a more Kardashian shape. In the history of gluteal enhancement, Mexico City stands out. It was here, in 1979, that a plastic surgeon, Mario González-Ulloa, first installed a pair of silicone implants designed specifically for the buttocks. The textbook calls González-Ulloa the "grandfather of buttock augmentation." The early 2000s saw a new generation of Mexico City buttock transformation luminaries, notably Ramón Cuenca-Guerra. Cuenca-Guerra laid out four characteristics that "determine attractive buttocks" as well as five types of "defects," with strategies for correcting each one. I, for instance, have defect type 5, the "senile buttock." While I understand the value of standardizing procedures and setting guidelines for surgical practice, I tripped over Cuenca-Guerra's methodology. How and by whom had the determinants been determined?


Golden Ratio Weighting Prevents Model Collapse

He, Hengzhi, Xu, Shirong, Cheng, Guang

arXiv.org Machine Learning

Recent studies have identified an intriguing phenomenon in recursive generative model training known as model collapse, in which models trained on data generated by previous models exhibit severe performance degradation. Addressing this issue and developing more effective training strategies have become central challenges in generative model research. In this paper, we investigate this phenomenon theoretically within a novel framework, where generative models are iteratively trained on a combination of newly collected real data and synthetic data from the previous training step. To develop an optimal training strategy for integrating real and synthetic data, we evaluate the performance of a weighted training scheme in various scenarios, including Gaussian distribution estimation and linear regression. We theoretically characterize the impact of the mixing proportion and weighting scheme of synthetic data on the final model's performance. Our key finding is that, across different settings, the optimal weighting scheme under different proportions of synthetic data asymptotically follows a unified expression, revealing a fundamental trade-off between leveraging synthetic data and generative model performance. Notably, in some cases, the optimal weight assigned to real data corresponds to the reciprocal of the golden ratio. Finally, we validate our theoretical results on extensive simulated datasets and a real tabular dataset.
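The weighted scheme the abstract describes can be illustrated with a toy example. The sketch below is an assumption for illustration only, not the paper's estimator: it mixes the sample means of real and synthetic data, putting the reciprocal of the golden ratio (about 0.618) on the real data, one of the optimal weights the paper reports in some settings.

```python
import math

# Golden ratio and the reported optimal weight on real data (illustrative).
PHI = (1 + math.sqrt(5)) / 2   # ~ 1.618
W_REAL = 1 / PHI               # ~ 0.618

def weighted_mean(real, synthetic, w_real=W_REAL):
    """Hypothetical weighted combination of real and synthetic sample means."""
    m_real = sum(real) / len(real)
    m_syn = sum(synthetic) / len(synthetic)
    return w_real * m_real + (1 - w_real) * m_syn
```

With `w_real = 1`, the estimate ignores synthetic data entirely; with `w_real = 0`, it relies on it completely. The trade-off the paper analyzes lies between these extremes.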


Leonardo vindicated: Pythagorean trees for minimal reconstruction of the natural branching structures

Ruta, Dymitr, Mio, Corrado, Damiani, Ernesto

arXiv.org Artificial Intelligence

Trees continue to fascinate with their natural beauty and as engineering masterpieces, optimal with respect to several independent criteria. The Pythagorean tree is a well-known fractal design that realistically mimics natural tree branching structures. We study various types of Pythagorean-like fractal trees with different base shapes, branching angles, and relaxed scales, in an attempt to identify and explain which variants most closely match the branching structures commonly observed in the natural world. Pursuing both realism and minimalism in the fractal tree model, we developed a flexibly parameterized, fast algorithm to grow and visually examine deep Pythagorean-inspired fractal trees, with the capability to orderly over- or underestimate Leonardo da Vinci's tree branching rule as well as to control various imbalances and branching angles. We tested the realism of the generated fractal tree images by the accuracy with which transfer-trained deep convolutional neural networks (CNNs) classify them as natural trees. Having empirically established the fractal tree parameters that maximize the CNN's natural-tree classification accuracy, we translated them back into branch scales and angles and reached conclusions that support the da Vinci branching rule and golden-ratio-based scaling, both for the shape of a branch and for the imbalance between child branches. We further claim that flexibly parameterized fractal trees can be used to generate artificial examples for training robust detectors of different tree species.
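The kind of recursive branching the abstract studies can be sketched in a few lines. The code below is not the authors' algorithm; it is a minimal illustrative stand-in: a binary fractal tree in which each child branch is shortened by the reciprocal of the golden ratio and rotated by a fixed spread angle.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio, used as the length-scaling factor

def grow(x, y, length, angle, depth, spread=math.radians(25), segments=None):
    """Recursively collect the line segments of a golden-ratio-scaled tree.

    angle is measured from the vertical; each level spawns two children,
    rotated by +/- spread and scaled by 1/PHI.
    """
    if segments is None:
        segments = []
    if depth == 0 or length < 1e-3:
        return segments
    x2 = x + length * math.sin(angle)
    y2 = y + length * math.cos(angle)
    segments.append(((x, y), (x2, y2)))       # this branch
    child = length / PHI                      # golden-ratio scaling
    grow(x2, y2, child, angle - spread, depth - 1, spread, segments)
    grow(x2, y2, child, angle + spread, depth - 1, spread, segments)
    return segments
```

A tree of depth d contains 2^d - 1 segments; varying `spread` and the scale factor (or making the two children asymmetric) gives the kind of imbalance parameters the paper explores.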


Double-Bayesian Learning

Jaeger, Stefan

arXiv.org Artificial Intelligence

Contemporary machine learning methods try to approach the Bayes error, as it is the lowest possible error any model can achieve. This paper postulates that any decision is composed of not one but two Bayesian decisions, and that decision-making is therefore a double-Bayesian process. The paper shows how this duality implies intrinsic uncertainty in decisions and how it incorporates explainability. In the proposed approach, Bayesian learning is tantamount to finding a base for a logarithmic function measuring uncertainty, with solutions being fixed points. Furthermore, following this approach, the golden ratio describes possible solutions satisfying Bayes' theorem. The double-Bayesian framework suggests using a learning rate and momentum weight with values similar to those used in the literature to train neural networks with stochastic gradient descent.
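For reference, the golden ratio the abstract invokes is the positive fixed point of a simple quadratic relation; this identity is standard mathematics, not taken from the paper's derivation:

```latex
\varphi^2 = \varphi + 1, \qquad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618, \qquad
\frac{1}{\varphi} = \varphi - 1 \approx 0.618.
```

The last identity is why the reciprocal of the golden ratio recurs alongside the ratio itself in golden-ratio-based weighting schemes.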


This A.I. knows who you find attractive better than you do

#artificialintelligence

When it comes to earning social currency, being attractive is as good as gold. A team of scientists from Finland has now designed a machine learning algorithm that can plumb the depths of these subjective judgments better than we can: it accurately predicts who we find attractive from our unique brainwaves, and can even generate a portrait that captures these qualities, with 83 percent accuracy. Far beyond just the laws of attraction, this novel brain-computer interface (BCI) could push wide open a new era of BCIs that bring our unvoiced desires to life. The research was published this February in the journal IEEE Transactions on Affective Computing. The hallmarks of attraction may change over time (from twisted mustaches and monocles to a clean shave and aviators), but regardless, top-tier social status can not only give your love life a boost but can even help you score the big promotion or easily slide into the good graces of the powerful elite. Yet while the societal effects of being deemed attractive are numerous, the mechanisms behind these personal preferences are still often shrouded in shadow.


The Golden Ratio of Learning and Momentum

Jaeger, Stefan

arXiv.org Machine Learning

Gradient descent has been a central training principle for artificial neural networks from the early beginnings to today's deep learning networks. The most common implementation is the backpropagation algorithm for training feed-forward neural networks in a supervised fashion. Backpropagation involves computing the gradient of a loss function, with respect to the weights of the network, to update the weights and thus minimize loss. Although the mean square error is often used as a loss function, the general stochastic gradient descent principle does not immediately connect with a specific loss function. Another drawback of backpropagation has been the search for optimal values of two important training parameters, learning rate and momentum weight, which are determined empirically in most systems. The learning rate specifies the step size towards a minimum of the loss function when following the gradient, while the momentum weight considers previous weight changes when updating current weights. Using both parameters in conjunction with each other is generally accepted as a means to improving training, although their specific values do not follow immediately from standard backpropagation theory. This paper proposes a new information-theoretical loss function motivated by neural signal processing in a synapse. The new loss function implies a specific learning rate and momentum weight, leading to empirical parameters often used in practice. The proposed framework also provides a more formal explanation of the momentum term and its smoothing effect on the training process. All results taken together show that loss, learning rate, and momentum are closely connected. To support these theoretical findings, experiments for handwritten digit recognition show the practical usefulness of the proposed loss function and training parameters.
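The two training parameters the abstract discusses appear together in the standard momentum update. The sketch below shows textbook SGD with momentum on a trivial one-dimensional objective; the learning rate and momentum values are common defaults from the literature, not the ones the paper derives.

```python
def sgd_momentum_step(w, v, grad, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update for a single scalar weight.

    The velocity v accumulates previous weight changes, smoothing the
    trajectory; lr scales the step along the current gradient.
    """
    v = momentum * v - lr * grad   # mix past weight change with new gradient
    w = w + v                      # apply the smoothed step
    return w, v

# Minimize f(w) = w^2 (gradient 2w), starting from w = 1.
w, v = 1.0, 0.0
for _ in range(100):
    w, v = sgd_momentum_step(w, v, 2 * w)
```

With momentum set to 0 the update reduces to plain gradient descent; with momentum near 1, the velocity term dominates and past weight changes carry more influence, which is the smoothing effect the paper formalizes.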


Impossible Cookware and Other Triumphs of the Penrose Tile - Issue 69: Patterns

Nautilus

In 1974, Roger Penrose, a British mathematician, created a revolutionary set of tiles that could be used to cover an infinite plane in a pattern that never repeats. In 1982, Daniel Shechtman, an Israeli crystallographer, discovered a metallic alloy whose atoms were organized unlike anything ever observed in materials science. Penrose garnered public renown on a scale rarely seen in mathematics. Shechtman won the Nobel Prize. Both scientists defied human intuition and changed our basic understanding of nature's design, revealing how infinite variation could emerge within a highly ordered environment. At the heart of their breakthroughs is "forbidden symmetry," so-called because it flies in the face of a deeply ingrained association between symmetry and repetition.